Multimodal Understanding AI News List | Blockchain.News

List of AI News about multimodal understanding

2025-06-26 18:16
Gemma 3n: High-Performance Open Source AI Model for Edge Devices with Single GPU/TPU Support

According to Demis Hassabis on Twitter, the newly released open-source Gemma 3n is the most powerful AI model that can run efficiently on a single GPU or TPU. Gemma 3n delivers advanced multimodal understanding and is optimized for edge computing thanks to its low memory footprint: it can operate with as little as 2GB of memory. This makes it a practical choice for developers building AI applications on resource-constrained devices. The model's open-source release and efficiency present significant business opportunities for industries seeking to deploy AI at the edge, including IoT, smart devices, and mobile applications (Source: @demishassabis, June 26, 2025).
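For readers who want to try the model, here is a minimal sketch of multimodal (image plus text) inference with Gemma 3n through the Hugging Face transformers "image-text-to-text" pipeline. The model ID, sample image URL, and generation settings below are illustrative assumptions and are not taken from the announcement.

```python
# Minimal sketch: multimodal (image + text) inference with Gemma 3n via the
# Hugging Face transformers pipeline. Model ID, image URL, and settings are
# illustrative assumptions, not details from the announcement above.
import torch
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="google/gemma-3n-E4B-it",  # assumed Hugging Face model ID
    torch_dtype=torch.bfloat16,      # reduced-precision weights to lower memory use
    device_map="auto",               # place the model on a single GPU/TPU/CPU as available
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image",
             "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]

result = pipe(text=messages, max_new_tokens=64)
print(result[0]["generated_text"][-1]["content"])  # assistant's reply text
```

Lower-precision or quantized weights are the usual lever for fitting within the roughly 2GB memory envelope mentioned above.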

2025-06-26 16:49
Gemma 3n E4B AI Model Sets New Benchmark: 140+ Language Support, Multimodal Capabilities, and 1300+ LMArena Score

According to @GoogleAI, the Gemma 3n E4B model is a significant step forward for the AI industry, supporting over 140 languages for text and 35 languages for multimodal understanding, and delivering major improvements in math, coding, and reasoning tasks. Notably, it is the first model under 10 billion parameters to surpass a score of 1300 on the LMArena benchmark, showcasing efficient performance and broad applicability for global, multilingual, and cross-domain AI solutions (source: @GoogleAI via Twitter, goo.gle/gemma-3n-general-ava).
